
    An Explainable AI System for Automated COVID-19 Assessment and Lesion Categorization from CT-scans

    COVID-19, caused by the SARS-CoV-2 pathogen, is a catastrophic pandemic outbreak all over the world, with an exponential increase in confirmed cases and, unfortunately, deaths. In this work we propose an AI-powered pipeline, based on the deep-learning paradigm, for automated COVID-19 detection and lesion categorization from CT scans. We first propose a new segmentation module aimed at automatically identifying lung parenchyma and lobes. Next, we combine this segmentation network with classification networks for COVID-19 identification and lesion categorization. We compare the obtained classification results with those obtained by three expert radiologists on a dataset of 162 CT scans. Results show a sensitivity of 90% and a specificity of 93.5% for COVID-19 detection, outperforming the expert radiologists, and an average lesion categorization accuracy of over 84%. Results also show that prior lung and lobe segmentation plays a significant role, improving performance by over 20 percentage points. Moreover, interpretation of the trained AI models reveals that the areas most relevant to the COVID-19 identification decision are consistent with the lesions clinically associated with the virus, i.e., crazy paving, consolidation and ground glass. This means that the artificial models are able to discriminate a positive patient from a negative one (both controls and patients with interstitial pneumonia who tested negative for COVID) by evaluating the presence of those lesions in CT scans. Finally, the AI models are integrated into a user-friendly GUI to support AI explainability for radiologists, which is publicly available at http://perceivelab.com/covid-ai.
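
    The following is a minimal, illustrative sketch of the two-stage design described above (lung/lobe segmentation followed by classification on the masked volume), written in PyTorch. The class names `LungSegmenter`, `CovidClassifier`, and the `predict` helper are hypothetical placeholders and do not correspond to the authors' released code.

```python
# Minimal sketch of the two-stage pipeline described in the abstract:
# (1) segment lung parenchyma/lobes, (2) classify the lung-masked CT volume.
# All model classes here are hypothetical placeholders, not the authors' code.
import torch
import torch.nn as nn

class LungSegmenter(nn.Module):
    """Placeholder segmentation network (e.g. a U-Net-style model in practice)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(8, 1, kernel_size=1),              # 1-channel lung-mask logits
        )
    def forward(self, ct):                               # ct: (B, 1, D, H, W)
        return torch.sigmoid(self.net(ct))               # soft lung mask in [0, 1]

class CovidClassifier(nn.Module):
    """Placeholder classifier operating on the lung-masked CT volume."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(16, num_classes)
    def forward(self, masked_ct):
        x = self.features(masked_ct).flatten(1)
        return self.head(x)                              # COVID vs. non-COVID logits

def predict(ct_volume, segmenter, classifier):
    """Segment first, then classify only the segmented lung regions."""
    with torch.no_grad():
        lung_mask = segmenter(ct_volume)
        masked_ct = ct_volume * (lung_mask > 0.5).float()  # keep lung parenchyma only
        return classifier(masked_ct).softmax(dim=1)
```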

    Brain2Image: Converting Brain Signals Into Images

    Reading the human mind has been a hot topic in recent decades, and recent research in neuroscience has found evidence that it is possible to decode, from neuroimaging data, how the human brain works. At the same time, the recent resurgence of deep learning, combined with the strong interest of the scientific community in generative methods, has enabled the generation of realistic images by learning a data distribution from noise. The quality of the generated images increases when the input data conveys information on the visual content of the images. Leveraging these recent trends, in this paper we present an approach for generating images using visually evoked brain signals recorded through an electroencephalograph (EEG). More specifically, we recorded EEG data from several subjects while they observed images on a screen and tried to regenerate the seen images. To achieve this goal, we developed a deep-learning framework consisting of an LSTM stacked with a generative method, which learns a more compact and noise-free representation of the EEG data and employs it to generate the visual stimuli that evoked the specific brain responses. Our Brain2Image approach was trained and tested using EEG data from six subjects while they were looking at images from 40 ImageNet classes. As generative models, we compared variational autoencoders (VAE) and generative adversarial networks (GAN). The results show that our approach is indeed able to generate images drawn from the same distribution as the shown images. Furthermore, GAN, despite generating less realistic images, shows better performance than VAE, especially with regard to sharpness. The obtained performance suggests that EEG contains patterns related to visual content and that such patterns can be used to effectively generate images that are semantically coherent with the evoking visual stimuli.
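
    The following is a minimal sketch of the encoder-plus-generator idea described above, assuming PyTorch: an LSTM compresses the multi-channel EEG sequence into a latent vector, which then conditions a GAN-style generator that produces an image. All layer sizes and the class names `EEGEncoder` and `Generator` are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch of the Brain2Image idea under stated assumptions: an LSTM
# encodes multi-channel EEG into a compact latent vector, which conditions a
# GAN-style generator producing an image. Sizes and names are illustrative.
import torch
import torch.nn as nn

class EEGEncoder(nn.Module):
    """LSTM that maps an EEG sequence (time x channels) to a latent vector."""
    def __init__(self, n_channels=128, hidden=128, latent=100):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, latent)
    def forward(self, eeg):                       # eeg: (B, T, n_channels)
        _, (h, _) = self.lstm(eeg)
        return self.proj(h[-1])                   # (B, latent)

class Generator(nn.Module):
    """GAN-style generator conditioned on the EEG latent vector."""
    def __init__(self, latent=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent, 4 * 4 * 128), nn.ReLU(),
            nn.Unflatten(1, (128, 4, 4)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 8x8
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16x16
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),    # 32x32 RGB
        )
    def forward(self, z):
        return self.net(z)

# Usage: encode a recorded EEG segment and decode it into an image.
encoder, generator = EEGEncoder(), Generator()
eeg = torch.randn(1, 440, 128)                    # (batch, time steps, electrodes)
image = generator(encoder(eeg))                   # (1, 3, 32, 32)
```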